

Search for: All records

Creators/Authors contains: "Alexander, M."

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Abstract

    We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $p_{\text{noisy}}$ and the corresponding noiseless output distribution $p_{\text{ideal}}$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $F$ that measures this correlation behaves as $F = \exp(-2s\epsilon \pm O(s\epsilon^2))$, where $\epsilon$ is the probability of error per circuit location and $s$ is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $p_{\text{noisy}}$ and the uniform distribution $p_{\text{unif}}$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $p_{\text{noisy}} \approx F p_{\text{ideal}} + (1-F) p_{\text{unif}}$. In other words, although at least one local error occurs with probability $1-F$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $O(F\epsilon\sqrt{s})$. Thus, the "white-noise approximation" is meaningful when $\epsilon\sqrt{s} \ll 1$, a quadratically weaker condition than the $\epsilon s \ll 1$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $s \ge \Omega(n\log(n))$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $\epsilon^{-1} \ge \tilde{\Omega}(n)$, which is needed to ensure errors are scrambled faster than $F$ decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.

     
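     A minimal numerical sketch of the white-noise approximation described in this abstract, in a toy setting where the ideal output distribution is simply given; the circuit size, error rate, and Porter-Thomas-like stand-in for the ideal distribution below are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Illustrative parameters (placeholders, not values from the paper)
n = 10                    # number of qubits
s = 200                   # number of two-qubit gates
eps = 1e-3                # error probability per circuit location

# Leading-order fidelity predicted by the paper: F = exp(-2*s*eps), up to O(s*eps^2)
F = np.exp(-2 * s * eps)

# Toy stand-in for the ideal output distribution over 2^n bitstrings
rng = np.random.default_rng(0)
p_ideal = rng.exponential(size=2**n)      # Porter-Thomas-like weights
p_ideal /= p_ideal.sum()
p_unif = np.full(2**n, 1 / 2**n)

# White-noise approximation: p_noisy ~ F * p_ideal + (1 - F) * p_unif
p_noisy_approx = F * p_ideal + (1 - F) * p_unif

# Linear cross-entropy benchmark, normalized so the ideal distribution
# scores its own (nonzero) value and the uniform distribution scores 0
def linear_xeb(p, q_ideal):
    d = len(q_ideal)
    return d * np.dot(p, q_ideal) - 1

xeb_ideal = linear_xeb(p_ideal, p_ideal)
xeb_noisy = linear_xeb(p_noisy_approx, p_ideal)
print(F, xeb_noisy / xeb_ideal)   # the ratio recovers F exactly, by construction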
  2. We propose a novel deterministic method for preparing arbitrary quantum states. When our protocol is compiled into CNOT and arbitrary single-qubit gates, it prepares an N-dimensional state in depth O(log(N)) and spacetime allocation (a metric that accounts for the fact that oftentimes some ancilla qubits need not be active for the entire circuit) O(N), which are both optimal. When compiled into the {H, S, T, CNOT} gate set, we show that it requires asymptotically fewer quantum resources than previous methods. Specifically, it prepares an arbitrary state up to error ϵ with optimal depth of O(log(N) + log(1/ϵ)) and spacetime allocation O(N log(log(N)/ϵ)), improving over O(log(N) log(log(N)/ϵ)) and O(N log(N/ϵ)), respectively. We illustrate how the reduced spacetime allocation of our protocol enables rapid preparation of many disjoint states with only constant-factor ancilla overhead: O(N) ancilla qubits are reused efficiently to prepare a product state of w N-dimensional states in depth O(w + log(N)) rather than O(w log(N)), achieving effectively constant depth per state. We highlight several applications where this ability would be useful, including quantum machine learning, Hamiltonian simulation, and solving linear systems of equations. We provide quantum circuit descriptions of our protocol, detailed pseudocode, and gate-level implementation examples using Braket.

     
    Free, publicly-accessible full text available February 15, 2025
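     For reference on the state-preparation task above, here is a minimal sketch of the standard binary-tree amplitude-encoding construction: a generic textbook baseline, not the protocol proposed in the paper, assuming real, non-negative amplitudes and a power-of-two dimension. It only makes concrete the kind of gate count that the paper's protocol improves upon.

```python
import numpy as np

def amplitude_encoding_angles(amplitudes):
    """Ry rotation angles for the standard binary-tree amplitude-encoding
    scheme (a generic baseline, not the paper's protocol). Assumes real,
    non-negative amplitudes and a power-of-two length."""
    probs = np.asarray(amplitudes, dtype=float) ** 2
    probs = probs / probs.sum()
    n_qubits = int(np.log2(len(probs)))
    angles_per_level = []
    level = probs
    for _ in range(n_qubits):
        left, right = level[0::2], level[1::2]
        parent = left + right
        # angle of the (controlled) Ry that splits each parent node's weight
        safe_parent = np.where(parent > 0, parent, 1.0)
        ratio = np.where(parent > 0, left / safe_parent, 1.0)
        angles_per_level.append(2 * np.arccos(np.sqrt(ratio)))
        level = parent
    return list(reversed(angles_per_level))   # root (first qubit) level first

# Toy usage: a uniform 4-dimensional state needs pi/2 rotations everywhere
print(amplitude_encoding_angles([0.5, 0.5, 0.5, 0.5]))
```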
  3. When particle dark matter is bound gravitationally around a massive black hole in sufficiently high densities, the dark matter will affect the rate of inspiral of a secondary compact object that forms a binary with the massive black hole. In this paper, we revisit previous estimates of the impact of dark-matter accretion by black-hole secondaries on the emitted gravitational waves. We identify a region of parameter space of binaries for which estimates of the accretion were too large (specifically, because the dark-matter distribution was assumed to be unchanging throughout the process, and the secondary black hole accreted more mass in dark matter than that enclosed within the orbit of the secondary). To restore consistency in these scenarios, we propose and implement a method to remove dark-matter particles from the distribution function when they are accreted by the secondary. This new feedback procedure satisfies mass conservation, and when evolved with physically reasonable initial data, the mass accreted by the secondary no longer exceeds the mass enclosed within its orbital radius. When we compare simulations with accretion feedback to those without it, including feedback leads to a smaller gravitational-wave dephasing relative to binaries in which only the effects of dynamical friction are modeled. Nevertheless, the dephasing can be hundreds to almost a thousand gravitational-wave cycles, an amount that should allow the effects of accretion to be inferred from gravitational-wave measurements of these systems.
    Free, publicly-accessible full text available December 22, 2024
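     A minimal toy sketch of the mass-conservation idea described in the abstract above, assuming a spherically averaged 1-D density profile rather than the full distribution function the paper evolves; the profile, orbital radius, and requested accreted mass are illustrative placeholders.

```python
import numpy as np

def accrete_with_feedback(r_grid, rho, r_orbit, dm_requested):
    """Remove accreted dark-matter mass from a toy density profile so the
    secondary never accretes more than the mass enclosed within its orbit.
    Illustrative only; the paper removes particles from the distribution
    function rather than rescaling a 1-D profile."""
    shell_mass = 4 * np.pi * r_grid**2 * rho * np.gradient(r_grid)
    inside = r_grid <= r_orbit
    m_enclosed = shell_mass[inside].sum()
    dm = min(dm_requested, m_enclosed)    # cap accretion at the enclosed mass
    rho_new = rho.copy()
    if m_enclosed > 0:
        # remove the accreted mass by a uniform fraction from the inner shells,
        # so the total dark-matter mass is conserved
        rho_new[inside] *= 1.0 - dm / m_enclosed
    return rho_new, dm

# Toy usage, in arbitrary units
r = np.linspace(0.01, 10.0, 500)
rho = r**-1.5                       # spike-like toy profile
rho_new, dm = accrete_with_feedback(r, rho, r_orbit=1.0, dm_requested=5.0)
print(dm)                           # never exceeds the mass enclosed within r_orbit
```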
  4. Fentanyl and fentanyl analogs are the main cause of recent overdose deaths in the United States. The presence of fentanyl analogs in illicit drugs makes it difficult to estimate their potencies. This makes the detection and differentiation of fentanyl analogs critically important. Surface-enhanced Raman spectroscopy (SERS) can differentiate structurally similar fentanyl analogs by yielding spectroscopic fingerprints for the detected molecules. In recent years, five fentanyl analogs (carfentanil, furanyl fentanyl, acetyl fentanyl, 4-fluoroisobutyryl fentanyl (4-FIBF), and cyclopropyl fentanyl (CPrF)) gained popularity and were found in 76.4% of trafficked fentanyl analogs. In this study, we focused on 4-FIBF, CPrF, and structurally similar fentanyl analogs, and we developed methods to differentiate them using theoretical and experimental approaches. To do this, a set of fentanyl analogs was examined using density functional theory (DFT) calculations. The DFT results obtained in this project permitted the assignment of spectral bands. These results were then compared with normal Raman and SERS spectra. Structurally similar fentanyl analogs show important differences in their spectra, and they have been visually differentiated from each other both theoretically and experimentally. Additional results using principal component analysis and soft independent modeling of class analogy show that the analogs can be distinguished using these techniques. The limits of detection for 4-FIBF and CPrF were determined to be 0.35 ng/mL and 4.4 ng/mL, respectively, using SERS. Experimental results obtained in this project can be readily implemented in field applications and smaller laboratories, where inexpensive portable Raman spectrometers are often present and used in drug analysis.

     
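     A minimal sketch of the principal-component-analysis step mentioned above, run on synthetic stand-in "spectra" rather than real SERS data; the peak positions, band widths, noise level, and class labels are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
wavenumbers = np.linspace(400, 1800, 700)            # cm^-1, synthetic axis

def synthetic_spectrum(peak_centers, n_samples=20, noise=0.02):
    """Sum of Gaussian bands plus noise, standing in for a SERS spectrum."""
    bands = sum(np.exp(-0.5 * ((wavenumbers - c) / 12.0) ** 2) for c in peak_centers)
    return bands + noise * rng.standard_normal((n_samples, wavenumbers.size))

# Two structurally similar "analogs" differing only in one shifted band
class_a = synthetic_spectrum([1003, 1250, 1600])     # stand-in for analog A
class_b = synthetic_spectrum([1003, 1275, 1600])     # stand-in for analog B

X = np.vstack([class_a, class_b])
scores = PCA(n_components=2).fit_transform(X)

# The leading component captures the band shift that separates the two classes
print(scores[:20, 0].mean(), scores[20:, 0].mean())
```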
  5. Abstract

    Convection‐generated gravity waves (CGWs) transport momentum and energy, and this momentum is a dominant driver of global features of Earth's atmosphere's general circulation (e.g., the quasi‐biennial oscillation, the pole‐to‐pole mesospheric circulation). As CGWs are not generally resolved by global weather and climate models, their effects on the circulation need to be parameterized. However, quality observations of GWs are spatiotemporally sparse, limiting understanding and preventing constraints on parameterizations. Convection‐permitting or ‐resolving simulations do generate CGWs, but validation is not possible as these simulations cannot reproduce the CGW‐forcing convection at correct times, locations, and intensities. Here, realistic convective diabatic heating, learned from full‐physics convection‐permitting Weather Research and Forecasting simulations, is predicted from weather radar observations using neural networks and a previously developed look‐up table. These heating rates are then used to force an idealized GW‐resolving dynamical model. Simulated CGWs forced in this way closely resembled those observed by the Atmospheric InfraRed Sounder in the upper stratosphere. CGW drag in these validated simulations extends hundreds of kilometers away from the convective sources, highlighting errors in current gravity wave drag parameterizations due to the use of the ubiquitous single‐column approximation. Such validatable simulations have significant potential to be used to further basic understanding of CGWs, improve their parameterizations physically, and provide more restrictive constraints on tuning with confidence.

     
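     A minimal sketch of the kind of supervised mapping described above, using synthetic profiles and scikit-learn's MLPRegressor in place of the neural networks the paper trains on convection-permitting WRF output; all shapes, variables, and the toy relationship between "reflectivity" and "heating" are invented for illustration. In the paper, the predicted heating rates are then used to force an idealized GW-resolving model; this sketch only illustrates the regression step.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_columns, n_levels = 2000, 40

# Synthetic stand-ins: column "radar reflectivity" profiles (inputs) and
# "convective diabatic heating" profiles (targets); the mapping below is an
# invented smooth function, not the physics learned from WRF in the paper.
reflectivity = rng.uniform(0, 60, size=(n_columns, n_levels))
vertical_shape = np.sin(np.linspace(0, np.pi, n_levels))        # deep-convective shape
heating = (reflectivity / 60.0) * vertical_shape \
          + 0.05 * rng.standard_normal((n_columns, n_levels))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(reflectivity[:1500], heating[:1500])

# Held-out skill of the emulated reflectivity -> heating mapping
pred = model.predict(reflectivity[1500:])
rmse = np.sqrt(np.mean((pred - heating[1500:]) ** 2))
print(rmse)
```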
  6. Free, publicly-accessible full text available August 18, 2024
  7. Free, publicly-accessible full text available June 1, 2024
  8. Free, publicly-accessible full text available July 24, 2024
  9. Over the past several decades, forests worldwide have experienced increases in biotic disturbances caused by insects and plant pathogens – a trend that is expected to continue with climate warming. Whereas the causes and effects of individual biotic disturbances are well studied, spatiotemporal interactions among multiple biotic disturbances are less so, despite their importance to ecosystem function and resilience. Here, we highlight an emerging phenomenon of “hotspots” of biotic disturbances (that is, two or more biotic disturbances that overlap in space and time), documenting trends in recent decades in temperate conifer forests of the western US. We also explore potential mechanisms behind and effects of biotic disturbance hotspots, with particular focus on how altered post‐disturbance recovery (successional pathways) can have profound consequences for ecosystem resilience and biodiversity conservation. Finally, we propose research directions that can elucidate drivers of biotic disturbance hotspots and their ecological effects at various spatial scales, and provide insight into this new knowledge frontier.

     
    Free, publicly-accessible full text available October 1, 2024